Over the decades, Hollywood has depicted artificial intelligence (AI) in multiple unsettling ways. In their futuristic settings, the AI begins to think for itself, outsmarts the humans, and overthrows society. The resulting dark world is left in a constant barrage of storms – metaphorically and meteorologically. (It’s always so gloomy and rainy in those movies.)
AI has been a part of manufacturing, shipping, and other industries for several years now. But the emergence of mainstream AI in daily life is stirring debates about its use. Content, art, video, and voice generation tools can make you write like Shakespeare, look like Tom Cruise, or create digital masterpieces in the style of Van Gogh. While it starts out as fun and games, overreliance on or misuse of AI can quickly turn shortcuts into irresponsibly cut corners and pranks into malicious impersonations.
It’s imperative that everyone interact responsibly with mainstream AI tools like ChatGPT, Bard, Craiyon, and Voice.ai, among others, to avoid these three real dangers of AI that you’re most likely to encounter.
1. AI Hallucinations
The cool thing about AI is that it has advanced to the point where it seems to think for itself. It’s constantly learning and forming new patterns. The more questions you ask it, the more data it collects and the “smarter” it gets. However, when you ask ChatGPT a question it doesn’t know the answer to, it doesn’t admit as much. Instead, it makes up an answer like a precocious schoolchild. This phenomenon is known as an AI hallucination.
One prime example of an AI hallucination occurred in a New York courtroom. A lawyer submitted a lengthy brief that cited multiple court cases to support his argument. It turned out the lawyer had used ChatGPT to write the entire brief and hadn’t fact-checked the AI’s work. ChatGPT had fabricated the supporting citations; none of the cases existed.1
AI hallucinations could become a threat to society because they could flood the internet with false information. Researchers and writers have a duty to thoroughly double-check any work they outsource to text generation tools like ChatGPT. When a trustworthy online source publishes content and asserts it as the unbiased truth, readers should be able to trust that the publisher isn’t leading them astray.
2. Deepfake, AI Art, and Fake News
We all know that you can’t trust everything you read on the internet. Deepfake and AI-generated art deepen the mistrust. Now, you can’t trust everything you see on the internet.
A deepfake is a digitally manipulated photo or video that portrays an event that never happened, or a person doing or saying something they never did or said. AI art generators create new images by drawing on compilations of works published across the internet to fulfill a prompt.
Deepfake and AI art become a danger to the public when people use them to supplement fake news reports. Individuals and organizations who feel strongly about their side of an issue may shunt integrity to the side to win new followers to their cause. Fake news is often incendiary and in extreme cases can cause unrest.
Before you share a “news” article with your social media following or shout about it to others, do some additional research to ensure its accuracy. Additionally, scrutinize the video or image accompanying the story. A deepfake gives itself away when facial expressions or hand gestures don’t look quite right. Also, the face may distort if the hands get too close to it. To spot AI art, think carefully about the context. Is it too fantastic or terrible to be true? Check out the shadows, shading, and the background setting for anomalies.
3. AI Voice Scams
An emerging and dangerous use of AI is cropping up in AI voice scams. Phishers have spent decades trying to coax personal details and money out of people over the phone. Now, with the help of AI voice tools, their scams are entering a whole new dimension of believability.
With as little as three seconds of genuine audio, AI voice generators can mimic someone’s voice with up to 95% accuracy. While AI voice generators may add some humor to a comedy deepfake video, criminals are using the technology to frighten people and scam them out of money. A criminal will clone someone’s voice and call that person’s loved ones, claiming they’ve been robbed or injured in an accident. McAfee’s Beware the Artificial Imposter report discovered that 77% of people targeted by an AI voice scam lost money as a result. Seven percent lost between $5,000 and $15,000.
While technology can be used for malicious purposes, it can also help protect against voice cloning attacks. Tools like McAfee Deepfake Detector aim to detect AI-generated deepfakes, including audio-based clones. Stay informed about advancements in security technology and consider using such tools to bolster your defenses.
Use AI Responsibly
Google’s code of conduct famously states, “Don’t be evil.”2 Because AI relies on input from humans, we have the power to make it as benevolent or as malevolent as we are. A certain amount of trust is placed in the engineers who hold the future of the technology – and, if Hollywood is to be believed, the fate of humanity – in their deft hands and brilliant minds.
“60 Minutes” likened AI’s influence on society to that of fire, agriculture, and electricity.3 Because AI never has to take a break, it can learn and teach itself new things every second of every day. It’s advancing quickly, and some of the written and visual art it creates can result in touching expressions of humanity. But AI doesn’t truly understand the emotion it portrays; it’s simply matching patterns. Is AI – especially its use in creative pursuits – dimming the spark of humanity? That remains to be seen.
When used responsibly and in moderation, AI may make us more efficient in daily life and inspire us to think in new ways. Be on the lookout for the dangers of AI, and use this amazing technology for good.
1 The New York Times, “Here’s What Happens When Your Lawyer Uses ChatGPT”
2 Alphabet, “Google Code of Conduct”
3 60 Minutes, “Artificial Intelligence Revolution”